A Bernstein-Von Mises Theorem for discrete probability distributions
We investigate the asymptotic normality of the posterior distribution in the
discrete setting, when the model dimension increases with the sample size. We
consider a probability mass function \theta_0 on \mathbbm{N}\setminus \{0\} and
a sequence of truncation levels (k_n)_n satisfying suitable growth conditions.
Let \hat{\theta}_n denote the maximum likelihood estimate of \theta_0 and let
\Delta_n denote the k_n-dimensional vector whose i-th coordinate is \sqrt{n}
(\hat{\theta}_n(i)-\theta_0(i)) for 1 \leq i \leq k_n. We check that, under
mild conditions on \theta_0 and on the sequence of prior probabilities on the
k_n-dimensional simplices, the variation distance between the posterior
distribution, recentered around \theta_0 and rescaled by \sqrt{n}, and the
k_n-dimensional Gaussian distribution centered at \Delta_n converges in
probability to 0.
This theorem can be used to prove the asymptotic normality of Bayesian
estimators of Shannon and R\'{e}nyi entropies. The proofs are based on
concentration inequalities for centered and non-centered chi-square (Pearson)
statistics. The latter allow us to establish posterior concentration rates with
respect to the Fisher distance rather than the Hellinger distance, as is
commonplace in non-parametric Bayesian statistics.
Comment: Published at http://dx.doi.org/10.1214/08-EJS262 in the Electronic
Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
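As an illustration of the statement (not part of the paper), the Bernstein-von Mises phenomenon can be checked numerically in the conjugate Dirichlet-multinomial setting; the truncated geometric target, the sample sizes, and all names below are my choices, and the truncation is taken fixed rather than growing with n:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a pmf supported on {1, ..., k} (a truncated
# geometric), observed n times.
k, n = 5, 200_000
theta0 = 0.5 ** np.arange(1, k + 1)
theta0 /= theta0.sum()

# With a Dirichlet(1, ..., 1) prior the posterior is conjugate.
counts = rng.multinomial(n, theta0)
theta_hat = counts / n                         # maximum likelihood estimate
post = rng.dirichlet(1 + counts, size=50_000)  # posterior draws

# Bernstein-von Mises: the posterior law of sqrt(n) * (theta - theta_hat)
# should be close to a centered Gaussian whose coordinate variances are
# theta0(i) * (1 - theta0(i)).
z = np.sqrt(n) * (post - theta_hat)
emp_var = z.var(axis=0)
asym_var = theta0 * (1 - theta0)
print(np.max(np.abs(emp_var - asym_var)))      # close to 0 for large n
```

The printed discrepancy shrinks as n grows, which is the qualitative content of the theorem in this conjugate toy case.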
Speed of propagation for Hamilton-Jacobi equations with multiplicative rough time dependence and convex Hamiltonians
We show that the initial value problem for Hamilton-Jacobi equations with
multiplicative rough time dependence, typically stochastic, and convex
Hamiltonians has finite speed of propagation. We prove that, in general, the
range of dependence is bounded by a multiple of the length of the "skeleton"
of the path, that is, the piecewise linear path obtained by connecting the
successive extrema of the original one. When the driving path is a Brownian
motion, we prove that its skeleton has almost surely finite length. We also
discuss the optimality of the estimate.
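The skeleton construction is concrete enough to sketch in code. The following illustrative Python (the function names, and taking "length" to mean the total variation of the piecewise linear interpolant, are my assumptions, not the paper's) extracts the successive extrema of a sampled path:

```python
import numpy as np

def skeleton_vertices(x):
    """Values of the successive extrema of a sampled path x, together with
    the two endpoints: the vertices of its piecewise linear 'skeleton'."""
    v = [x[0]]
    for i in range(1, len(x) - 1):
        # record x[i] whenever the direction of travel reverses there
        if (x[i] - v[-1]) * (x[i + 1] - x[i]) < 0:
            v.append(x[i])
    v.append(x[-1])
    return np.array(v)

def skeleton_length(x):
    """Length of the skeleton, measured as its total variation:
    the sum of the heights of the monotone runs of the path."""
    v = skeleton_vertices(x)
    return np.abs(np.diff(v)).sum()

# A monotone run collapses to a single segment of the skeleton.
x = np.array([0.0, 0.5, 1.0, 0.2])
print(skeleton_vertices(x))   # vertices 0.0, 1.0, 0.2
print(skeleton_length(x))     # 1.0 + 0.8, up to rounding
```

For a discretely sampled Brownian path the same routine applies verbatim; the paper's result concerns the continuum limit of this quantity.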
Eikonal equations and pathwise solutions to fully non-linear SPDEs
We study the existence and uniqueness of stochastic viscosity solutions of
fully nonlinear, possibly degenerate, second-order stochastic PDEs with
quadratic Hamiltonians associated with a Riemannian geometry. The results are
new and extend the class of equations studied so far by the last two authors.
Long-time behaviour of stochastic Hamilton-Jacobi equations
The long-time behavior of stochastic Hamilton-Jacobi equations is analyzed,
including the stochastic mean curvature flow as a special case. In a variety of
settings, new and sharpened results are obtained. Among them are (i) a
regularization-by-noise phenomenon for the mean curvature flow with
homogeneous noise, which establishes that the inclusion of noise speeds up the
decay of solutions, and (ii) the long-time convergence of solutions to
spatially inhomogeneous stochastic Hamilton-Jacobi equations. A number of
motivating examples of nonlinear stochastic partial differential equations are
presented in the appendix.
Order Estimation and Model Selection
reason why source coding concepts and techniques have become a standard tool in the area. This chapter presents four kinds of results. A first, very general consistency result in a Bayesian setting provides hints about the ideal penalties that could be used in penalized maximum likelihood order estimation. Then we provide a general construction of strongly consistent order estimators based on universal coding arguments. The third main result reports a recent tour de force by Csiszár and Shields (2000), who show that the Bayesian Information Criterion provides a strongly consistent Markov order estimator. We conclude by presenting a general framework for analyzing the Bahadur efficiency of order estimation procedures, following the line of Gassiat and Boucheron (to appear).
LRI UMR 8623 CNRS, Université Paris-Sud; Mathématiques, Université Paris-Sud
2.1 Model Order Identification: what is it about? In the preceding chapters, we have been concerned with inference problems in HMMs where t
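As a toy illustration of the BIC order estimator discussed above (a sketch under simplifying assumptions: the function name, the handling of the initial context, and the penalty bookkeeping are mine, not Csiszár and Shields'):

```python
import numpy as np
from collections import Counter

def bic_markov_order(seq, m, max_order=3):
    """Pick the Markov order maximizing the log-likelihood minus the BIC
    penalty (1/2) * (number of free parameters) * log(n).
    seq is a sequence over the alphabet {0, ..., m-1}."""
    n = len(seq)
    best_order, best_score = 0, -np.inf
    for k in range(max_order + 1):
        ctx_counts, pair_counts = Counter(), Counter()
        for i in range(k, n):
            ctx = tuple(seq[i - k:i])        # length-k context
            ctx_counts[ctx] += 1
            pair_counts[(ctx, seq[i])] += 1
        # maximized log-likelihood of the order-k model
        loglik = sum(c * np.log(c / ctx_counts[ctx])
                     for (ctx, s), c in pair_counts.items())
        penalty = 0.5 * m ** k * (m - 1) * np.log(n)
        if loglik - penalty > best_score:
            best_order, best_score = k, loglik - penalty
    return best_order

# An order-1 binary chain with sticky states, far from i.i.d.
rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1], [0.2, 0.8]])
seq, s = [], 0
for _ in range(5000):
    s = int(rng.choice(2, p=P[s]))
    seq.append(s)
print(bic_markov_order(seq, m=2))   # 1 with high probability
```

The order-0 model loses roughly the conditional-entropy gap per symbol, which dwarfs the logarithmic penalty, while higher orders gain only a bounded likelihood fluctuation; this is the trade-off behind the consistency result.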
Inference in finite state space non parametric hidden Markov models and applications
Hidden Markov models (HMMs) are intensively used in various fields to model and classify data observed along a line (e.g. time). The fit of such models strongly relies on the choice of emission distributions, which are most often chosen from some parametric family. In this paper, we prove that finite state space non parametric HMMs are identifiable as soon as the transition matrix of the latent Markov chain has full rank and the emission probability distributions are linearly independent. This general result allows the use of semi- or non-parametric emission distributions. Based on this result, we present a series of classification problems that can be tackled outside the strict parametric framework. We derive the corresponding inference algorithms. We also illustrate their use on a few biological examples, showing that they may improve classification performance.
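The kind of classification referred to above can be sketched with the standard forward-backward recursions. This is a generic illustration (the names and toy numbers are mine, not the paper's); its only HMM-specific input is a matrix of per-observation emission likelihoods, which is exactly where semi- or non-parametric emission estimates would plug in:

```python
import numpy as np

def forward_backward(obs_lik, A, pi):
    """Posterior probabilities of the hidden states of an HMM.
    obs_lik[t, j] is the likelihood of observation t under state j; it may
    come from any parametric or nonparametric emission estimate.
    A is the transition matrix, pi the initial distribution."""
    T, K = obs_lik.shape
    alpha = np.zeros((T, K))      # scaled forward variables
    beta = np.ones((T, K))        # scaled backward variables
    c = np.zeros(T)               # scaling factors for numerical stability
    alpha[0] = pi * obs_lik[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * obs_lik[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (obs_lik[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta          # posterior state probabilities
    return gamma / gamma.sum(axis=1, keepdims=True)

# Toy example: two sticky states; the first observations favor state 0,
# the last ones state 1.
A = np.array([[0.95, 0.05], [0.10, 0.90]])
pi = np.array([0.9, 0.1])
obs_lik = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.1, 0.9]])
gamma = forward_backward(obs_lik, A, pi)
states = gamma.argmax(axis=1)     # posterior decoding: [0, 0, 1, 1]
```

Classifying each observation by the state maximizing its posterior probability is the posterior-decoding rule; the identifiability result guarantees that the quantities entering `obs_lik` are well defined even without a parametric emission family.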